Probability density function |
Cumulative distribution function |
Parameters | $d_1,\ d_2 > 0$ deg. of freedom |
---|---|
Support | $x \in (0, \infty)$ if $d_1 = 1$, otherwise $x \in [0, \infty)$ |
CDF | $I_{\frac{d_1 x}{d_1 x + d_2}}\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right)$ |
Mean | $\frac{d_2}{d_2 - 2}$ for $d_2 > 2$ |
Mode | $\frac{d_1 - 2}{d_1} \cdot \frac{d_2}{d_2 + 2}$ for $d_1 > 2$ |
Variance | $\frac{2 d_2^2 (d_1 + d_2 - 2)}{d_1 (d_2 - 2)^2 (d_2 - 4)}$ for $d_2 > 4$ |
Skewness | $\frac{(2 d_1 + d_2 - 2) \sqrt{8 (d_2 - 4)}}{(d_2 - 6) \sqrt{d_1 (d_1 + d_2 - 2)}}$ for $d_2 > 6$ |
Ex. kurtosis | see text |
MGF | does not exist, raw moments defined in text and in [1][2] |
CF | see text |
In probability theory and statistics, the F-distribution is a continuous probability distribution.[1][2][3][4] It is also known as Snedecor's F distribution or the Fisher-Snedecor distribution (after R.A. Fisher and George W. Snedecor). The F-distribution arises frequently as the null distribution of a test statistic, most notably in the analysis of variance; see F-test.
If a random variable $X$ has an F-distribution with parameters $d_1$ and $d_2$, we write $X \sim F(d_1, d_2)$. Then the probability density function for $X$ is given by

$$f(x; d_1, d_2) = \frac{\sqrt{\dfrac{(d_1 x)^{d_1}\, d_2^{d_2}}{(d_1 x + d_2)^{d_1 + d_2}}}}{x\, B\!\left(\frac{d_1}{2}, \frac{d_2}{2}\right)}$$

for real $x > 0$. Here $B$ is the beta function. In many applications, the parameters $d_1$ and $d_2$ are positive integers, but the distribution is well-defined for positive real values of these parameters.
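As a quick numerical sketch (not part of the original article; it assumes SciPy is available and the helper name `f_pdf` is purely illustrative), the closed-form density can be evaluated directly and checked against `scipy.stats.f.pdf`:

```python
# Evaluate the F(d1, d2) density from the closed-form expression and
# cross-check it against SciPy's implementation.
from scipy.special import beta as beta_fn
from scipy.stats import f as f_dist

def f_pdf(x, d1, d2):
    """Density of F(d1, d2) at x > 0, via the closed-form expression."""
    num = ((d1 * x) ** d1 * d2 ** d2 / (d1 * x + d2) ** (d1 + d2)) ** 0.5
    return num / (x * beta_fn(d1 / 2, d2 / 2))

x, d1, d2 = 1.5, 5, 9
manual = f_pdf(x, d1, d2)
library = f_dist.pdf(x, d1, d2)  # agrees with the formula above
```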
The cumulative distribution function is

$$F(x; d_1, d_2) = I_{\frac{d_1 x}{d_1 x + d_2}}\!\left(\frac{d_1}{2}, \frac{d_2}{2}\right),$$

where $I$ is the regularized incomplete beta function.
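This relationship can be verified numerically; the sketch below (an illustration, not from the original article) computes the regularized incomplete beta function with `scipy.special.betainc` and compares it with `scipy.stats.f.cdf`:

```python
# The F-distribution CDF expressed through the regularized incomplete
# beta function I_z(a, b), which scipy.special.betainc computes.
from scipy.special import betainc
from scipy.stats import f as f_dist

def f_cdf(x, d1, d2):
    """CDF of F(d1, d2): I_{d1 x / (d1 x + d2)}(d1/2, d2/2)."""
    return betainc(d1 / 2, d2 / 2, d1 * x / (d1 * x + d2))

x, d1, d2 = 2.0, 4, 7
manual = f_cdf(x, d1, d2)
library = f_dist.cdf(x, d1, d2)  # agrees with the beta-function form
```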
The expectation, variance, and other details about the $F(d_1, d_2)$ distribution are given in the sidebox; for $d_2 > 8$, the excess kurtosis is

$$12\,\frac{d_1 (5 d_2 - 22)(d_1 + d_2 - 2) + (d_2 - 4)(d_2 - 2)^2}{d_1 (d_2 - 6)(d_2 - 8)(d_1 + d_2 - 2)}.$$
The $k$-th moment of an $F(d_1, d_2)$ distribution exists and is finite only when $2k < d_2$, and it is equal to [5]

$$\mu_X(k) = \left(\frac{d_2}{d_1}\right)^{k} \frac{\Gamma\!\left(\frac{d_1}{2} + k\right)\,\Gamma\!\left(\frac{d_2}{2} - k\right)}{\Gamma\!\left(\frac{d_1}{2}\right)\,\Gamma\!\left(\frac{d_2}{2}\right)}.$$
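The moment formula can be sketched in a few lines (the helper name `f_moment` is illustrative; SciPy is used only as a cross-check). For $k = 1$ it reduces to the mean $d_2/(d_2 - 2)$, and the second moment recovers the variance listed in the sidebox:

```python
# k-th raw moment of F(d1, d2) from the gamma-function formula;
# exists only when 2k < d2.
from math import gamma
from scipy.stats import f as f_dist

def f_moment(k, d1, d2):
    """k-th raw moment of F(d1, d2); finite only when 2k < d2."""
    if 2 * k >= d2:
        raise ValueError("moment does not exist: need 2k < d2")
    return (d2 / d1) ** k * gamma(d1 / 2 + k) * gamma(d2 / 2 - k) / (
        gamma(d1 / 2) * gamma(d2 / 2)
    )

d1, d2 = 6, 10
first = f_moment(1, d1, d2)   # mean: d2/(d2 - 2) = 1.25
second = f_moment(2, d1, d2)
library_mean = f_dist.moment(1, d1, d2)
```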
The F-distribution is a particular parametrization of the beta prime distribution, which is also called the beta distribution of the second kind.
The characteristic function is listed incorrectly in many standard references (e.g., [2]). The correct expression [6] is

$$\varphi^{F}_{d_1, d_2}(s) = \frac{\Gamma\!\left(\frac{d_1 + d_2}{2}\right)}{\Gamma\!\left(\frac{d_2}{2}\right)} U\!\left(\frac{d_1}{2},\, 1 - \frac{d_2}{2},\, -\frac{d_2}{d_1} i s\right)$$

where $U(a, b, z)$ is the confluent hypergeometric function of the second kind.
A random variate of the F-distribution with parameters $d_1$ and $d_2$ arises as the ratio of two appropriately scaled chi-squared variates:

$$X = \frac{U_1 / d_1}{U_2 / d_2}$$

where

- $U_1$ and $U_2$ have chi-squared distributions with $d_1$ and $d_2$ degrees of freedom respectively, and
- $U_1$ and $U_2$ are independent.
In instances where the F-distribution is used, for instance in the analysis of variance, independence of U1 and U2 might be demonstrated by applying Cochran's theorem.
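The chi-squared construction above lends itself to a simulation sketch (illustrative code, not from the original article; the parameter choices are arbitrary): drawing independent chi-squared variates and forming the scaled ratio yields samples whose empirical mean approaches $d_2/(d_2 - 2)$:

```python
# Simulate F(d1, d2) as the ratio of scaled independent chi-squared variates.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(42)
d1, d2 = 5, 12
n = 200_000
u1 = chi2.rvs(d1, size=n, random_state=rng)  # U1 ~ chi-squared(d1)
u2 = chi2.rvs(d2, size=n, random_state=rng)  # U2 ~ chi-squared(d2), independent
samples = (u1 / d1) / (u2 / d2)              # (U1/d1)/(U2/d2) ~ F(d1, d2)

# For d2 > 2 the mean of F(d1, d2) is d2/(d2 - 2), here 12/10 = 1.2,
# so the empirical mean of the simulated ratios should be close to 1.2.
empirical_mean = samples.mean()
```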
A generalization of the (central) F-distribution is the noncentral F-distribution.